
    A Robot-Sensor Network Security Architecture for Monitoring Applications

    This paper presents SNSR (Sensor Network Security using Robots), a novel, open, and flexible architecture that improves security in static sensor networks by benefiting from robot-sensor network cooperation. In SNSR, the robot performs sensor node authentication and radio-based localization (enabling centralized topology computation and route establishment) and interacts directly with nodes to send them configurations or receive status and anomaly reports without intermediaries. SNSR operation is divided into stages arranged in an iterative feedback structure, which enables repeating the execution of stages to adapt to changes, respond to attacks, or detect and correct errors. By exploiting the robot's capabilities, SNSR provides high security levels and adaptability without requiring complex mechanisms. This paper presents SNSR, analyzes its security against common attacks, and experimentally validates its performance.
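
    The staged, iterative operation lends itself to a simple control loop. Below is a minimal C++ sketch of that feedback structure; the stage names and the success/retry logic are illustrative assumptions, not details taken from the paper.

```cpp
// Illustrative sketch of SNSR's iterative stage structure (stage names
// and retry logic are assumptions, not taken from the paper).
#include <iostream>
#include <vector>

enum class Stage { Authentication, Localization, TopologyComputation,
                   RouteEstablishment, Monitoring };

// Placeholder stage handler: returns false to request a new iteration,
// e.g. after an anomaly report or a detected topology change.
bool runStage(Stage s) {
    std::cout << "running stage " << static_cast<int>(s) << "\n";
    return true;
}

int main() {
    const std::vector<Stage> stages = {
        Stage::Authentication, Stage::Localization, Stage::TopologyComputation,
        Stage::RouteEstablishment, Stage::Monitoring};
    // Feedback structure: re-run the stage sequence until all stages
    // succeed, so the network adapts to changes, attacks, or errors.
    bool stable = false;
    while (!stable) {
        stable = true;
        for (Stage s : stages)
            if (!runStage(s)) { stable = false; break; }
    }
}
```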

    LIDAR-based detection of furrows for agricultural robot autonomous navigation

    Robust and accurate autonomous navigation is a major challenge in agricultural robotics. This paper presents a LIDAR-based processing system for autonomous robot navigation in crops with high vegetation density. The method detects and locates the crop furrows and provides them to the robot control system, which guides the robot so that its caterpillar tracks move along the furrows, preventing damage to the crop. The proposed LIDAR-based processing pipeline includes several inconsistency-removal and template-matching steps to deal with the high noise level of LIDAR scans. It has been implemented in C++ using ROS Noetic and validated in two plantations at different crop growth stages.
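
    As a rough illustration of the template-matching step, the C++ sketch below slides a V-shaped furrow template over a single synthetic scan line after a simple inconsistency-removal pass; the data, the template shape, and the hold-last-valid filter are assumptions for illustration, not the paper's actual pipeline.

```cpp
// Illustrative furrow detection on one LIDAR scan line: inconsistency
// removal followed by template matching. Data and template are placeholders.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Synthetic ranges (m); NaN models a spurious return from vegetation.
    std::vector<double> scan = {1.0, 1.0, 0.8, 0.6, 0.8, 1.0,
                                std::nan(""), 1.0, 0.8, 0.6, 0.8, 1.0, 1.0};
    // Inconsistency removal: replace invalid readings with the last valid one.
    for (std::size_t i = 1; i < scan.size(); ++i)
        if (std::isnan(scan[i])) scan[i] = scan[i - 1];

    // Zero-mean V-shaped template: a dip in range where the beam enters a furrow.
    std::vector<double> tmpl = {0.0, -0.2, -0.4, -0.2, 0.0};
    double tm = 0; for (double v : tmpl) tm += v; tm /= tmpl.size();
    for (double& v : tmpl) v -= tm;

    // Slide the template; the window with the smallest mean-removed SSD wins.
    std::size_t best = 0; double bestSsd = 1e18;
    for (std::size_t i = 0; i + tmpl.size() <= scan.size(); ++i) {
        double wm = 0;
        for (std::size_t j = 0; j < tmpl.size(); ++j) wm += scan[i + j];
        wm /= tmpl.size();
        double ssd = 0;
        for (std::size_t j = 0; j < tmpl.size(); ++j) {
            double d = (scan[i + j] - wm) - tmpl[j];
            ssd += d * d;
        }
        if (ssd < bestSsd) { bestSsd = ssd; best = i; }
    }
    std::cout << "furrow centre near beam index " << best + tmpl.size() / 2 << "\n";
}
```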

    An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements

    This work is motivated by robot-sensor network cooperation techniques in which sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that acts on the measurement gathering process through mechanisms that dynamically modify the rate and variety of the measurements integrated into the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors SLAM performance and dynamically selects the measurement gathering configuration, balancing SLAM accuracy against resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated in experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.
    Unión Europea ICT-2011-288082; CLEAR (DPI2011-28937-C02-01); Ministerio de Educación y Deporte
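
    A minimal C++ sketch of the supervision idea follows, assuming a policy that selects the inter-beacon depth and gathering rate from the trace of the SLAM covariance; the thresholds and configurations are invented for illustration.

```cpp
// Illustrative supervision policy: pick a measurement-gathering
// configuration from the current filter uncertainty. All values are
// placeholder assumptions, not the paper's tuning.
#include <iostream>

struct GatheringConfig {
    int interBeaconDepth;  // 0 = robot-beacon only, >0 adds inter-beacon hops
    double rateHz;         // measurement collection rate
};

// Spend more measurements when uncertainty is high, fewer once converged.
GatheringConfig selectConfig(double traceP /* trace of SLAM covariance */) {
    if (traceP > 4.0) return {2, 10.0};  // high uncertainty: deep and fast
    if (traceP > 1.0) return {1, 5.0};   // medium uncertainty
    return {0, 1.0};                     // converged: save energy and CPU
}

int main() {
    for (double traceP : {6.0, 2.5, 0.4}) {
        GatheringConfig c = selectConfig(traceP);
        std::cout << "trace(P)=" << traceP
                  << " -> depth=" << c.interBeaconDepth
                  << ", rate=" << c.rateHz << " Hz\n";
    }
}
```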

    ASAP: Adaptive Scheme for Asynchronous Processing of Event-based Vision Algorithms

    Event cameras can capture pixel-level illumination changes with very high temporal resolution and dynamic range. They have received increasing research interest due to their robustness to lighting conditions and motion blur. Two main approaches exist in the literature for feeding event-based processing algorithms: packaging the triggered events into event packages, or sending them one-by-one as single events. These approaches suffer limitations from either processing overflow or lack of responsivity. Processing overflow is caused by high event generation rates, when the algorithm cannot process all the events in real time. Conversely, lack of responsivity happens at low event generation rates, when the event packages are sent at too low a frequency. This paper presents ASAP, an adaptive scheme that manages the event stream through variable-size packages that accommodate the event-package processing times. The experimental results show that ASAP is capable of feeding an asynchronous event-by-event clustering algorithm in a responsive and efficient manner while preventing overflow.
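
    The core adaptation can be summarized in a few lines: shrink the package when processing lags, grow it when there is slack. The C++ sketch below assumes a fixed per-event processing cost and a target latency, both placeholder values.

```cpp
// Illustrative variable-size event packaging: the package size adapts so
// the per-package processing time tracks a target latency.
#include <algorithm>
#include <iostream>

int main() {
    double packageSize = 1000;            // events per package (adapted on-line)
    const double targetMs = 5.0;          // desired per-package processing time
    const double costMsPerEvent = 0.004;  // simulated processing cost (placeholder)
    for (int k = 0; k < 5; ++k) {
        // Stand-in for a real timing measurement of the processing algorithm.
        double elapsedMs = costMsPerEvent * packageSize;
        // Multiplicative update: <1 when overloaded (prevents overflow),
        // >1 when under-loaded (keeps the algorithm responsive).
        packageSize = std::clamp(packageSize * targetMs / elapsedMs, 1.0, 1e6);
        std::cout << "iteration " << k << ": next package ~"
                  << static_cast<long>(packageSize) << " events\n";
    }
}
```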

    On-Line RSSI-Range Model Learning for Target Localization and Tracking

    This article belongs to the Special Issue QoS in Wireless Sensor/Actuator Networks and Systems: http://www.mdpi.com/journal/jsan/special_issues/QoS_netw_syst
    The interactions of the Received Signal Strength Indicator (RSSI) with the environment are very difficult to model, inducing significant errors in RSSI-range models and severely disturbing target localization and tracking methods. Some techniques adopt a training-based approach in which the RSSI-range characteristics of the environment are learned off-line in a prior training phase. However, the training phase is time-consuming and must be repeated whenever the environment changes, constraining flexibility and adaptability. This paper presents schemes in which each anchor node learns on-line an RSSI-range model adapted to the particularities of its environment and then uses the trained model for target localization and tracking. Two methods are presented. The first uses the locations of the anchor nodes to dynamically adapt the RSSI-range model. In the second, each anchor node uses estimates of the target location (anchor nodes are assumed to be equipped with cameras) to adapt its RSSI-range model on-line. The paper presents both methods, describes their operation integrated in localization and tracking schemes, and experimentally evaluates their performance in the UBILOC testbed.
    Unión Europea EU Project MULTIDRONE H2020-ICT-2016-2017/H2020-ICT-2016-1; Unión Europea EU Project AEROARMS H2020-ICT-2014-1-644271; AEROMAIN Spanish R&D plan DPI2014-59383-C2-1-R; Unión Europea EU Project AEROBI H2020-ICT-2015-1-68738
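
    For concreteness, here is a minimal C++ sketch of on-line fitting of the standard log-distance model RSSI = A - 10*n*log10(d), via incremental least squares on x = log10(d). The sample pairs are invented; in the paper's methods the distances come from known anchor positions or camera-based target estimates.

```cpp
// Illustrative on-line learning of a log-distance RSSI-range model and its
// inversion for ranging. Sample data are placeholder assumptions.
#include <cmath>
#include <iostream>

struct OnlineFit {  // incremental simple linear regression y = a + b*x
    double n = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
    void add(double x, double y) { ++n; sx += x; sy += y; sxx += x * x; sxy += x * y; }
    double slope() const { return (n * sxy - sx * sy) / (n * sxx - sx * sx); }
    double intercept() const { return (sy - slope() * sx) / n; }
};

int main() {
    OnlineFit fit;
    // (distance m, RSSI dBm) pairs, e.g. from links at known positions.
    const double samples[][2] = {{1, -40}, {2, -46}, {4, -52}, {8, -58}};
    for (auto& s : samples) fit.add(std::log10(s[0]), s[1]);
    double A = fit.intercept();              // RSSI at 1 m
    double pathLossExp = -fit.slope() / 10.0;
    std::cout << "A=" << A << " dBm, n=" << pathLossExp << "\n";
    // Invert the trained model for ranging: d = 10^((A - RSSI)/(10*n)).
    double rssi = -49;
    std::cout << "RSSI " << rssi << " dBm -> d ~ "
              << std::pow(10.0, (A - rssi) / (10.0 * pathLossExp)) << " m\n";
}
```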

    Computer vision techniques for forest fire perception

    This paper presents computer vision techniques for forest fire perception, involving the measurement of forest fire properties (fire front, flame height, flame inclination angle, fire base width) required for the implementation of advanced forest fire-fighting strategies. The system computes a 3D perception model of the fire and could also be used for visualizing the fire evolution in remote computer systems. The presented system integrates the processing of images from visual and infrared cameras, and applies sensor fusion techniques that also involve telemetry sensors and GPS. The paper also includes some results of forest fire experiments.
    European Commission EVG1-CT-2001-00043; European Commission IST-2001-34304; Ministerio de Educación y Ciencia DPI2005-0229
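
    As a toy illustration of measuring two of these properties, the C++ sketch below extracts fire base width and flame height from a thresholded infrared frame; the frame, threshold, and metres-per-pixel scale are assumptions, and the actual system additionally fuses visual imagery, telemetry and GPS.

```cpp
// Illustrative fire-property extraction from a segmented IR frame.
// Frame contents, threshold and scale are placeholder assumptions.
#include <iostream>

int main() {
    const int W = 8, H = 6;
    // Synthetic IR intensities (pixels above the threshold belong to the fire).
    int ir[H][W] = {
        {0,0,0,0,0,0,0,0},
        {0,0,9,9,0,0,0,0},
        {0,8,9,9,8,0,0,0},
        {0,8,9,9,9,8,0,0},
        {7,8,9,9,9,8,7,0},
        {7,8,9,9,9,8,7,0}};
    const int thr = 6;
    int left = W, right = -1, top = H;
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c)
            if (ir[r][c] > thr) {
                if (c < left) left = c;
                if (c > right) right = c;
                if (r < top) top = r;
            }
    const double mPerPix = 0.5;  // assumed scale from telemetry/GPS geo-referencing
    std::cout << "fire base width ~ " << (right - left + 1) * mPerPix << " m, "
              << "flame height ~ " << (H - top) * mPerPix << " m\n";
}
```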

    A WSN-Based Tool for Urban and Industrial Fire-Fighting

    This paper describes a WSN tool to increase safety in urban and industrial fire-fighting activities. Unlike most approaches, we assume that there is no preexisting WSN in the building, which involves interesting advantages but imposes some constraints. The system integrates the following functionalities: fire monitoring, firefighter monitoring, and dynamic escape path guiding. It also includes a robust localization method that employs RSSI-range models dynamically trained to cope with the peculiarities of the environment. The training and application stages of the method are applied simultaneously, resulting in significant adaptability. Besides simulations and laboratory tests, a prototype of the proposed system has been validated in close-to-operational conditions.
    Unión Europea INFSO-ICT-22405
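
    Dynamic escape-path guiding can be pictured as shortest-path routing over the node graph with fire-dependent edge costs. Below is a minimal C++ sketch with an invented five-node topology; it is not the paper's protocol.

```cpp
// Illustrative escape-path guiding: Dijkstra over the WSN graph where
// edges into nodes reporting fire become effectively unusable.
#include <functional>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

int main() {
    const int N = 5, start = 0, exitNode = 4;
    std::vector<std::vector<std::pair<int, double>>> g(N);
    auto link = [&](int a, int b, double w) {
        g[a].push_back({b, w}); g[b].push_back({a, w}); };
    link(0, 1, 1); link(1, 4, 1);               // short corridor via node 1
    link(0, 2, 1); link(2, 3, 1); link(3, 4, 1); // longer corridor via 2-3
    std::vector<bool> fire(N, false);
    fire[1] = true;                              // node 1 reports fire
    auto cost = [&](int to, double w) { return fire[to] ? 1e6 : w; };

    // Dijkstra from the firefighter's node; fire re-weights edges on the fly.
    std::vector<double> d(N, 1e18); std::vector<int> prev(N, -1);
    using QE = std::pair<double, int>;
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> q;
    d[start] = 0; q.push({0, start});
    while (!q.empty()) {
        auto [dist, u] = q.top(); q.pop();
        if (dist > d[u]) continue;
        for (auto [v, w] : g[u])
            if (d[u] + cost(v, w) < d[v]) {
                d[v] = d[u] + cost(v, w); prev[v] = u; q.push({d[v], v});
            }
    }
    std::cout << "escape path (exit->start):";
    for (int v = exitNode; v != -1; v = prev[v]) std::cout << ' ' << v;
    std::cout << '\n';                           // avoids the burning node 1
}
```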

    Ten years of cooperation between mobile robots and sensor networks

    This paper presents an overview of the work carried out by the Group of Robotics, Vision and Control (GRVC) at the University of Seville on cooperation between mobile robots and sensor networks. The GRVC, led by Professor Anibal Ollero, has worked over the last ten years on techniques in which robots and sensor networks exploit synergies and collaborate tightly, developing numerous research projects on the topic. In this paper, based on our research, we introduce several relevant challenges in combining sensor networks with mobile robots, and then describe the techniques we have developed for these challenges and the main results. In particular, the paper focuses on autonomous self-deployment of sensor networks; cooperative localization and tracking; self-localization and mapping; and large-scale scenarios. Extensive experimental results and lessons learnt are also discussed in the paper.

    Combining Unmanned Aerial Systems and Sensor Networks for Earth Observation

    The combination of remote sensing and sensor network technologies can provide unprecedented earth observation capabilities and has attracted strong R&D interest in recent years. However, the procedures and tools used for the deployment, geo-referencing and collection of logged measurements from traditional environmental monitoring stations are not suitable when dealing with hundreds or thousands of sensor nodes deployed over an environment of tens of hectares. This paper presents a scheme based on Unmanned Aerial Systems that aims to take a step forward in the use of sensor networks for environment observation. The presented scheme includes methods, tools and technologies to address sensor node deployment, localization and collection of measurements. The scheme is scalable (suitable for medium to large environments with high numbers of sensor nodes) and highly autonomous (operated with very little human intervention). This paper presents the scheme, including its main components, techniques and technologies, and describes its implementation and evaluation in field experiments.
    Ministerio de Economía y Competitividad DPI2014-59383-C2-1-
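
    As one small piece of such a scheme, the C++ sketch below plans a measurement-collection tour over estimated node positions with a nearest-neighbour heuristic; the coordinates and the heuristic itself are illustrative assumptions, not the paper's method.

```cpp
// Illustrative UAV collection tour over localized sensor nodes using a
// greedy nearest-neighbour ordering. Positions are placeholders.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct P { double x, y; };

int main() {
    std::vector<P> nodes = {{50, 10}, {10, 40}, {80, 70}, {20, 90}};
    std::vector<bool> done(nodes.size(), false);
    P pos = {0, 0};                         // UAV take-off point
    std::cout << "collection tour:";
    for (std::size_t k = 0; k < nodes.size(); ++k) {
        std::size_t best = 0; double bd = 1e18;
        for (std::size_t i = 0; i < nodes.size(); ++i)
            if (!done[i]) {
                double d = std::hypot(nodes[i].x - pos.x, nodes[i].y - pos.y);
                if (d < bd) { bd = d; best = i; }
            }
        done[best] = true; pos = nodes[best];
        std::cout << " node" << best;       // hover here and download the log
    }
    std::cout << '\n';
}
```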

    UAV human teleoperation using event-based and frame-based cameras

    Teleoperation is a crucial aspect of human-robot interaction in unmanned aerial vehicle (UAV) applications. Fast perception processing is required to ensure robustness, precision, and safety. Event cameras are neuromorphic sensors that provide low-latency response, high dynamic range and low power consumption. Although classical image-based methods have been used extensively for human-robot interaction tasks, their responsiveness is limited by their processing rates. This paper presents a human-robot teleoperation scheme for UAVs that exploits the advantages of both traditional and event cameras. The proposed scheme was tested in teleoperation missions where the pose of a multirotor robot is controlled in real time using human gestures detected from events.
    European Union (UE) H2020 2019-87147; Consejo Europeo de Investigación 78824
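
    A minimal C++ sketch of the final step of such a pipeline follows, mapping a gesture label (e.g. produced by event-based hand tracking) to a multirotor velocity set-point; the gesture names and the gain are invented for illustration.

```cpp
// Illustrative gesture-to-command mapping for UAV teleoperation.
// Gesture vocabulary and gain are placeholder assumptions.
#include <iostream>
#include <string>

struct Vel { double vx, vy, vz; };

Vel gestureToCommand(const std::string& gesture) {
    const double k = 0.5;                  // m/s per gesture (assumed gain)
    if (gesture == "hand_up")    return {0, 0,  k};
    if (gesture == "hand_down")  return {0, 0, -k};
    if (gesture == "hand_left")  return {0,  k, 0};
    if (gesture == "hand_right") return {0, -k, 0};
    return {0, 0, 0};                      // unknown gesture: hover
}

int main() {
    for (const std::string g : {"hand_up", "hand_left", "fist"}) {
        Vel v = gestureToCommand(g);
        std::cout << g << " -> v=(" << v.vx << ", " << v.vy << ", " << v.vz << ")\n";
    }
}
```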